The term "Super Resolution" refers to the process of improving the quality of an image by increasing its apparent resolution. In most cases, a trained super resolution model can transform images from low-resolution (LR) to high-resolution (HR) while maintaining clean edges and preserving important details.
Original image (left) and with super resolution applied (right)
The video below provides an overview of super resolution with deep learning.
Super Resolution with Deep Learning (29:46)
You can also view this video and others in the In-Depth Lessons section of our website (https://www.theobjects.com/dragonfly/learn-recorded-webinars.html).
The following topics are discussed in the video.
The following items are required for training a Deep Learning model for super resolution:
Important The dimensions of output images must be an exact multiple of the dimensions of input images, such as 2x, 4x, 10x, and so on.
A selection of untrained models suitable for regression are supplied with the Deep Learning Tool (see Deep Learning Architectures). You can also download models from the Infinite Toolbox (see Infinite Toolbox), or import models from Keras.
The following items are optional for training a Deep Learning model for super resolution:
To help monitor and evaluate the progress of training Deep Learning models, you can designate a 2D rectangular region for visual feedback. With the Visual Feedback option selected, the model’s inference will be displayed in the Training dialog in real time as each epoch is completed, as shown on the right of the screen capture below. In addition, you can create a checkpoint cache so that you can save a copy of the model at a selected checkpoint (see Enabling Checkpoint Caches and Loading and Saving Model Checkpoints). Saved checkpoints are marked in bold on the plotted graph, as shown below.
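The idea behind a checkpoint cache can be sketched in a few lines of Python. This is an illustrative sketch only, not the Deep Learning Tool's actual API: it stores a copy of the model state at each completed epoch so that any checkpoint can be restored later, which is what lets you roll back to the epoch whose results you preferred.

```python
import copy

class CheckpointCache:
    """Illustrative sketch of a checkpoint cache: keeps a deep copy of
    the model state at each epoch so any checkpoint can be restored.
    (Not the Deep Learning Tool's actual API.)"""

    def __init__(self):
        self._cache = {}

    def save(self, epoch, model_state):
        # Deep-copy so later training updates cannot mutate the snapshot.
        self._cache[epoch] = copy.deepcopy(model_state)

    def load(self, epoch):
        return self._cache[epoch]

# Example: cache hypothetical weights after each epoch, then restore epoch 2.
cache = CheckpointCache()
for epoch, weights in enumerate([{"w": 0.9}, {"w": 0.5}, {"w": 0.3}], start=1):
    cache.save(epoch, weights)
best = cache.load(2)  # restore the epoch-2 checkpoint
```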
Training dialog
Note Any region that you define for visual feedback should not overlap the training data.

Note The visual feedback image for each epoch is saved during model training. You can review the result of each epoch by scrolling through the plotted graph. If the checkpoint cache is enabled, you can also save the model at a selected checkpoint when you review the training results (see Loading and Saving Model Checkpoints).
Dragonfly's Deep Learning Tool provides a number of architectures — including EDSR, U-Net, U-Net 3D, UNet++, and WDSR — that are suitable for super resolution.
The Deep Learning Tool dialog appears.
The Model Generator dialog appears (see Model Generator for additional information about the dialog).
This will filter the available architectures to those recommended for super resolution.

Note A description of each architecture is available in the Architecture description box, along with a link to more detailed information.

Note In most cases, super resolution strategies require an Input count of 1.

Note Make sure that you select the correct scale for EDSR and WDSR models, as shown below, which is the ratio of the output size to the input size. The dimensions of output images must be an exact multiple of the dimensions of input images, such as 2x, 4x, 10x, and so on.
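The exact-multiple requirement can be checked mechanically. The following is a minimal sketch (not part of the Deep Learning Tool) that returns the uniform integer scale factor when the output dimensions are a valid multiple of the input dimensions, and None otherwise:

```python
def is_valid_scale(input_size, output_size):
    """Return the integer scale factor (2, 4, 10, ...) if every output
    dimension is an exact multiple of the matching input dimension,
    otherwise None. Illustrative helper, not a product API."""
    factors = set()
    for i, o in zip(input_size, output_size):
        if i <= 0 or o % i != 0:
            return None
        factors.add(o // i)
    # A single uniform factor is required across all dimensions.
    return factors.pop() if len(factors) == 1 else None

# A 256x256 input with a 512x512 output is a valid 2x configuration;
# a 600x600 output is not an exact multiple and would be rejected.
scale = is_valid_scale((256, 256), (512, 512))
```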
Note If you are using a U-Net model for super resolution, you will need to edit the model architecture and add layer(s) for scaling (see Model Editing Panel).
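The reason a scaling layer is needed is that a standard U-Net produces an output with the same spatial dimensions as its input. A scaling layer (for example, an upsampling layer) enlarges the feature map by the chosen factor. The sketch below shows the core operation, nearest-neighbour upsampling, in plain Python; in practice this would be a layer added in the Model Editing panel, not hand-written code:

```python
def upsample_nearest(image, scale):
    """Nearest-neighbour upsampling of a 2D image (a list of rows) by an
    integer scale factor -- the kind of operation a scaling layer adds
    on top of a U-Net to enlarge its output."""
    out = []
    for row in image:
        # Repeat each pixel `scale` times horizontally...
        wide = [px for px in row for _ in range(scale)]
        # ...then repeat the widened row `scale` times vertically.
        out.extend([wide[:] for _ in range(scale)])
    return out

img = [[1, 2],
       [3, 4]]
up = upsample_nearest(img, 2)  # a 4x4 image of repeated 2x2 blocks
```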
Note Refer to Editable Parameters for Deep Learning Architectures for information about the settings available in the Model Generator dialog.
After processing is complete, a confirmation message appears.
Information about the loaded model appears in the dialog (see Details), while a graph view of the data flow is available on the Model Editing panel (see Model Editing Panel).
You can start training a model for super resolution after you have prepared your training input(s) and output(s), as well as any required masks (see Prerequisites).
To open the Deep Learning Tool, choose Artificial Intelligence > Deep Learning Tool on the menu bar.
Information about the model appears in the Details box (see Details).
Note In most cases, you should be able to train a super resolution model supplied with the Deep Learning Tool as is, without making changes to its architecture.
The Model Training panel appears (see Model Training Panel).
Note If you chose to train your model in 3D, then additional options will appear for the input, as shown below. See Configuring Multi-Slice Inputs for information about selecting reference slices and spacing values.

Note If your model requires multiple inputs, select the additional input(s), as shown below.


Note If you are training with multiple training sets, click the Add New button and then choose the required input(s), output, and mask for the additional item(s).

In most cases, you can deselect Generate additional training data by augmentation.
In most cases, you should increase the Patch size as much as available memory allows. In addition, the MeanSquareError loss function usually provides good results.

See Basic Settings for information about choosing the patch size, stride ratio, batch size, epochs number, loss function, and optimization algorithm.
Note You should monitor the estimated memory ratio when you choose the training parameter settings. The ratio should not exceed 1.00 (see Estimated Memory Ratio).
Note that this step is optional and that these settings can be adjusted after you have evaluated the initial training results.
You can monitor the progress of training in the Training dialog, which is shown below.
During training, the quantities 'loss' and 'val_loss' should decrease. You should continue to train until 'val_loss' stops decreasing. You can also select any of the other available metrics to monitor training progress.
Note You can also click the List tab and then review the precise values for each epoch.
Note The measure of a good super resolution model is how well edges and details are preserved while resolution is increased.
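Alongside visual inspection of edges and details, a common quantitative check (not a built-in Dragonfly metric; shown here for reference) is the peak signal-to-noise ratio, computed from the mean square error between the upscaled result and a ground-truth high-resolution image:

```python
import math

def psnr(mse, max_value=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher values
    mean the upscaled image is closer to the ground-truth HR image."""
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)
```

As a rule of thumb, a higher PSNR between the model's output and held-out ground truth indicates better fidelity, though it does not capture perceptual edge sharpness on its own.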
Note If your results continue to be unsatisfactory, you might consider choosing another architecture.